Infinite-dimensional gradient-based descent for alpha-divergence minimisation

Authors

Abstract

This paper introduces the (α, Γ)-descent, an iterative algorithm which operates on measures and performs α-divergence minimisation in a Bayesian framework. This gradient-based procedure extends the commonly-used variational approximation by adding a prior on the variational parameters in the form of a measure. We prove that for a rich family of functions Γ, this algorithm leads at each step to a systematic decrease in the α-divergence, and we derive convergence results. Our framework recovers the Entropic Mirror Descent algorithm and provides an alternative algorithm which we call the Power Descent. Moreover, in its stochastic formulation, the (α, Γ)-descent allows the mixture weights of any given mixture model to be optimised without any information on the underlying distribution of its parameters. This renders our method compatible with many choices of parameter updates and applicable to a wide range of Machine Learning tasks. We demonstrate empirically, on both toy and real-world examples, the benefit of using the Power Descent and going beyond the Entropic Mirror Descent framework, which fails as the dimension grows.
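To make the multiplicative update concrete, here is a minimal numerical sketch of a mixture-weight step of the kind the abstract describes: each weight is multiplied by Γ evaluated at a per-component statistic and the weights are renormalised. The particular forms of Γ shown (an exponential for Entropic Mirror Descent, a power function for the Power Descent), the step size eta, and the placeholder gradient estimates b are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def gamma_emd(v, eta=0.5):
    # Entropic-Mirror-Descent-style choice: Gamma(v) = exp(-eta * v).
    return np.exp(-eta * v)

def gamma_power(v, alpha=0.5, eta=0.5):
    # Power-Descent-style choice: Gamma(v) = ((alpha - 1) * v + 1) ** (eta / (1 - alpha)).
    return ((alpha - 1.0) * v + 1.0) ** (eta / (1.0 - alpha))

def update_weights(weights, b, gamma):
    # Multiply each mixture weight by Gamma of its estimated descent
    # statistic, then renormalise so the weights stay on the simplex.
    w = weights * gamma(b)
    return w / w.sum()

# Toy usage: 4 mixture components with fake per-component estimates b.
w = np.full(4, 0.25)
b = np.array([0.3, -0.1, 0.8, 0.05])
print(update_weights(w, b, gamma_emd))
print(update_weights(w, b, lambda v: gamma_power(v)))
```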


Similar articles

A Stochastic Gradient Descent Algorithm for Structural Risk Minimisation

Structural risk minimisation (SRM) is a general complexity-regularization method which automatically selects the model complexity that approximately minimises the misclassification error probability of the empirical risk minimiser. It does so by adding a complexity penalty term, depending on (m, k), to the empirical risk of the candidate hypotheses and then, for any fixed sample size m, minimising the sum with...
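A minimal sketch of this selection rule: for each class index k, add a complexity penalty to the empirical risk and pick the minimiser. The square-root penalty form below is an illustrative assumption, not the paper's exact penalty term.

```python
import math

def penalty(m, k):
    # Illustrative complexity penalty depending on (m, k); the paper's
    # exact term may differ.
    return math.sqrt(k * math.log(m) / m)

def srm_select(empirical_risks, m):
    # empirical_risks[k-1]: empirical risk of the ERM over hypothesis class k.
    scores = {k + 1: r + penalty(m, k + 1) for k, r in enumerate(empirical_risks)}
    return min(scores, key=scores.get)

risks = [0.30, 0.22, 0.18, 0.17, 0.165]  # toy values, decreasing with class index
print(srm_select(risks, m=500))  # trades fit against complexity; selects k = 3 here
```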


Stochastic Particle Gradient Descent for Infinite Ensembles

The superior performance of ensemble methods with infinite models is well known. Most of these methods are based on optimization problems in infinite-dimensional spaces with some regularization; for instance, boosting methods and convex neural networks use L1-regularization with a non-negative constraint. However, due to the difficulty of handling L1-regularization, these problems require ea...


InfiniteBoost: building infinite ensembles with gradient descent

In machine learning, ensemble methods have demonstrated high accuracy on a variety of problems in different areas. The best-known algorithms used intensively in practice are random forests and gradient boosting. In this paper we present InfiniteBoost, a novel algorithm which combines the best properties of these two approaches. The algorithm constructs an ensemble of trees for which two pr...


Infinite-dimensional GARCH models

GARCH models in Hilbert spaces. The present thesis consists of two parts. In the first part, we introduce generalised autoregressive conditional heteroscedasticity (GARCH) models in Hilbert spaces, present the mathematical concepts required to analyse these models in the time domain, and study them. Building on recent advances in functional data theory and operator statistics, processes taking values in spaces ...
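For reference, a sketch of the recursion such models generalise: the scalar GARCH(1,1). In the Hilbert-space version suggested by the abstract, the process and its conditional variance would be H-valued and the coefficients would become operators; this lifting is an assumption, not the thesis's exact formulation.

```latex
% Scalar GARCH(1,1); the Hilbert-space variant would replace a, b by
% operators on H (illustrative assumption based on the abstract).
y_t = \sigma_t \, \varepsilon_t, \qquad
\sigma_t^2 = \omega + a \, y_{t-1}^2 + b \, \sigma_{t-1}^2,
\qquad \omega > 0, \; a, b \ge 0.
```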


Penalized Bregman Divergence Estimation via Coordinate Descent

Variable selection via penalized estimation is appealing for dimension reduction. For penalized linear regression, Efron et al. (2004) introduced the LARS algorithm. Recently, the coordinate descent (CD) algorithm was developed by Friedman et al. (2007) for penalized linear regression and penalized logistic regression and was shown to be computationally superior. This paper explores...
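As background, here is a minimal coordinate-descent sketch for the lasso, min over beta of (1/2n)||y − Xβ||² + λ||β||₁, in the spirit of the CD algorithm the abstract cites. The fixed iteration count and the λ value below are illustrative choices.

```python
import numpy as np

def soft_threshold(z, lam):
    # Closed-form solution of the one-dimensional lasso subproblem.
    return np.sign(z) * max(abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual: remove every contribution except coordinate j.
            r = y - X @ beta + X[:, j] * beta[j]
            z = X[:, j] @ r / n
            beta[j] = soft_threshold(z, lam) / (X[:, j] @ X[:, j] / n)
    return beta

# Toy usage: only feature 0 carries signal.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(50)
print(lasso_cd(X, y, lam=0.1))  # large coefficient on feature 0, rest near zero
```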



Journal

Journal title: Annals of Statistics

Year: 2021

ISSN: 0090-5364, 2168-8966

DOI: https://doi.org/10.1214/20-aos2035